
    OBOE: Collaborative Filtering for AutoML Model Selection

    Algorithm selection and hyperparameter tuning remain two of the most challenging tasks in machine learning. Automated machine learning (AutoML) seeks to automate these tasks to enable widespread use of machine learning by non-experts. This paper introduces OBOE, a collaborative filtering method for time-constrained model selection and hyperparameter tuning. OBOE forms a matrix of the cross-validated errors of a large number of supervised learning models (algorithms together with hyperparameters) on a large number of datasets, and fits a low-rank model to learn the low-dimensional feature vectors for the models and datasets that best predict the cross-validated errors. To find promising models for a new dataset, OBOE runs a set of fast but informative algorithms on the new dataset and uses their cross-validated errors to infer the feature vector for the new dataset. OBOE can find good models under constraints on the number of models fit or the total time budget. To this end, this paper develops a new heuristic for active learning in time-constrained matrix completion based on optimal experiment design. Our experiments demonstrate that OBOE delivers state-of-the-art performance faster than competing approaches on a test bed of supervised learning problems. Moreover, the success of the bilinear model used by OBOE suggests that AutoML may be simpler than was previously understood.
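    The core pipeline the abstract describes can be sketched on synthetic data: fit a low-rank model to an offline matrix of errors, observe a few "probe" models on a new dataset, infer the dataset's latent feature vector by least squares, and predict the errors of all remaining models. This is a minimal illustration of the idea, not the authors' implementation; all shapes, the probe-model choice, and the truncated-SVD fit are assumptions for the sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_datasets, n_models, k = 50, 30, 3

    # Synthetic low-rank "error matrix": rows are datasets, columns are models.
    # In OBOE this would hold offline cross-validated errors.
    A = rng.random((n_datasets, k))
    B = rng.random((k, n_models))
    E = A @ B

    # Step 1: low-rank fit via truncated SVD (stand-in for the offline fit).
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    Y = np.diag(s[:k]) @ Vt[:k]      # latent model features, shape (k, n_models)

    # Step 2: a new dataset arrives; we only observe errors of a few fast models.
    x_true = rng.random(k)
    e_new = x_true @ B               # full error vector (hidden in practice)
    probe = np.arange(5)             # indices of the cheap probe models (assumed)
    x_hat, *_ = np.linalg.lstsq(Y[:, probe].T, e_new[probe], rcond=None)

    # Step 3: predict errors for all models and pick the most promising one.
    e_pred = x_hat @ Y
    best_model = int(np.argmin(e_pred))
    err = float(np.abs(e_pred - e_new).mean())
    print(best_model, err)
    ```

    Because the synthetic new dataset lies exactly in the span of the model features, the least-squares step recovers its error vector almost exactly; real data would add noise and motivate the experiment-design heuristic for choosing which probe models to run.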

    Low-Rank Tensor Recovery with Euclidean-Norm-Induced Schatten-p Quasi-Norm Regularization

    The nuclear norm and Schatten-p quasi-norm of a matrix are popular rank proxies in low-rank matrix recovery. Unfortunately, computing the nuclear norm or Schatten-p quasi-norm of a tensor is NP-hard, which is an obstacle for low-rank tensor completion (LRTC) and tensor robust principal component analysis (TRPCA). In this paper, we propose a new class of rank regularizers based on the Euclidean norms of the CP component vectors of a tensor and show that these regularizers are monotonic transformations of the tensor Schatten-p quasi-norm. This connection enables us to minimize the Schatten-p quasi-norm in LRTC and TRPCA implicitly. The methods do not use the singular value decomposition and hence scale to big tensors. Moreover, the methods are not sensitive to the choice of initial rank and provide an arbitrarily sharper rank proxy for low-rank tensor recovery compared to the nuclear norm. We provide theoretical guarantees in terms of recovery error for LRTC and TRPCA, which show that a relatively smaller p in the Schatten-p quasi-norm leads to tighter error bounds. Experiments using LRTC and TRPCA on synthetic data and natural images verify the effectiveness and superiority of our methods compared to baseline methods.
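    The key computational point of the abstract is that the regularizer touches only the CP factor matrices, never the tensor's singular values. A hedged sketch of one such SVD-free surrogate is below; the exact functional form used in the paper may differ, and the choice sum_r (||a_r|| ||b_r|| ||c_r||)^p here is an assumption made for illustration.

    ```python
    import numpy as np

    def cp_tensor(factors):
        """Assemble a 3-way tensor from CP factor matrices A, B, C
        (each column r is one component vector of the decomposition)."""
        A, B, C = factors
        return np.einsum('ir,jr,kr->ijk', A, B, C)

    def euclidean_cp_regularizer(factors, p=0.5):
        """Rank surrogate built from Euclidean norms of CP components.

        Sketch form (an assumption, not necessarily the paper's exact
        regularizer): sum_r (||a_r|| * ||b_r|| * ||c_r||)**p.
        Cost is O((I + J + K) * R) -- no SVD of the tensor is needed,
        which is why such regularizers scale to big tensors.
        """
        norms = np.prod([np.linalg.norm(F, axis=0) for F in factors], axis=0)
        return float(np.sum(norms ** p))

    # Demo: a random rank-4 CP model of a 10 x 12 x 8 tensor.
    rng = np.random.default_rng(0)
    rank = 4
    A, B, C = (rng.random((n, rank)) for n in (10, 12, 8))
    T = cp_tensor([A, B, C])
    reg = euclidean_cp_regularizer([A, B, C], p=0.5)
    print(T.shape, reg)
    ```

    Smaller p penalizes large components less aggressively relative to small ones, which is consistent with the abstract's claim that smaller p yields a sharper rank proxy and tighter error bounds.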

    Automated Machine Learning under Resource Constraints

    186 pages. Automated machine learning (AutoML) seeks to reduce the human and machine costs of finding machine learning models and hyperparameters with good predictive performance. AutoML is easy with unlimited resources: an exhaustive search across all possible solutions finds the best performing model. This dissertation studies resource-constrained AutoML, in which only limited resources (such as compute or memory) are available for model search. We present a wide variety of strategies for choosing a model under resource constraints, including meta-learning across datasets with low-rank matrix and tensor decomposition and experiment design, and efficient neural architecture search (NAS) using weight sharing, reinforcement learning, and Monte Carlo sampling. We propose several AutoML frameworks that realize these ideas, and describe implementations and experimental results.

    Lyapunov-Optimized Two-Way Relay Networks With Stochastic Energy Harvesting


    High‐Quality Femtosecond Laser Surface Micro/Nano‐Structuring Assisted by A Thin Frost Layer

    Femtosecond laser ablation has been demonstrated to be a versatile tool to produce micro/nanoscale features with high precision and accuracy. However, the use of high laser fluence to increase the ablation efficiency usually results in unwanted effects, such as redeposition of debris, formation of a recast layer, and a heat‐affected zone in or around the ablation craters. Here this limitation is circumvented by exploiting a thin frost layer with a thickness of tens of microns, which can be directly formed by the condensation of water vapor from the air onto the exposed surface whose temperature is below the freezing point. When the femtosecond laser beam is focused onto the target surface covered with a thin frost layer, only the local frost layer around the laser‐irradiated spot melts into water, helping to boost ablation efficiency, suppress the recast layer, and reduce the heat‐affected zone, while the remaining frost layer prevents ablation debris from adhering to the target surface. By this frost‐assisted strategy, high‐quality surface micro/nano‐structures are successfully achieved on both plane and curved surfaces at high laser fluences, and the mechanism behind the formation of high‐spatial‐frequency (HSF) laser‐induced periodic surface structures (LIPSSs) on silicon is discussed.